
    Black Labor at Pine Grove & Caledonia Furnaces, 1789-1860

    Black labor operating under various degrees of freedom found a suitable working environment, if not a safe haven, in several iron forges of South Central Pennsylvania from the late 1790s through the 1850s. Primary accounts indicate that two in particular, Pine Grove Furnace of Cumberland County and Caledonia Furnace of Franklin County, harbored runaway slaves to augment their work force. Pine Grove records dating from 1789–1801 specify the names of “negro” employees, verifying that black labor coexisted with white, but the day books, journals, and ledgers do not denote status. Whether these men were free, slaves rented out by Pennsylvania slave owners, or runaways from the South cannot be gleaned from the day books; all three possibilities existed, especially in the 1790s. Circumstantial evidence suggests that escaped slaves did bolster the ranks of both forges until 1860. With renowned abolitionist Thaddeus Stevens in ownership of Caledonia, and proprietors sympathetic to the same cause at Pine Grove, the environment favored Underground Railroad activity. When this circumstance is coupled with the presence of a Quaker Meeting House in northern Adams County and the recognition that both forges lay within a thirty-mile radius of the Maryland slave-state border, a recipe existed for hide-outs to be employed in area furnaces. [excerpt]

    International development in transition

    This is the pre-peer-reviewed version of the following article: HARMAN, S. and WILLIAMS, D. (2014), International development in transition. International Affairs, 90: 925–941. doi: 10.1111/1468-2346.12148, which has been published in final form at http://dx.doi.org/10.1111/1468-2346.12148. International development is in a period of transition. While the outcome of this is still unclear, this article argues that there are at least four areas in which the project of international development is changing. First, there is a debate, especially within the World Bank, about development strategy and how we think about development, particularly in terms of the balance between states and markets. This is evident in the debate over state failure and the new structural economics. Second, there is increasing evidence of a shift in lending, away from projects of 'small' human development, perhaps best encapsulated by the United Nations Millennium Development Goals, towards more transformative 'big' development projects such as infrastructure. Third, 'non-traditional' aid donors and new forms of private philanthropy are playing a more significant role in development financing and this, in turn, offers developing countries a new range of choices about what kinds of development assistance they receive. Fourth, aid relations are changing as a result of the renewed agency of developing states, particularly in sub-Saharan Africa, and shifts towards increased South-South cooperation are growing, as evidenced by increased funding from regional development banks and increased trade flows. The article reviews these changes and suggests a series of questions and challenges that arise from them for analysts of international development, developing countries and traditional aid donors. © 2014 The Royal Institute of International Affairs

    Search-based amorphous slicing

    Amorphous slicing is an automated source code extraction technique with applications in many areas of software engineering, including comprehension, reuse, testing and reverse engineering. Algorithms for syntax-preserving slicing are well established, but amorphous slicing is harder because it requires arbitrary transformation; finding good general-purpose amorphous slicing algorithms therefore remains as hard as general program transformation. In this paper we show how amorphous slices can be computed using search techniques. The paper presents results from a set of experiments designed to explore the application of genetic algorithms, hill climbing, random search and systematic search to a set of six subject programs. As a benchmark, the results are compared to those from an existing analytical algorithm for amorphous slicing, which was written specifically to perform well with the sorts of program under consideration. The results, while tentative at this stage, do give grounds for optimism. The search techniques proved able to reduce the size of the programs under consideration in all cases, sometimes equaling the performance of the specifically tailored analytic algorithm. In one case, the search techniques performed better, highlighting a fault in the existing algorithm.
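
    As a rough illustration of the search-based idea, the sketch below hill-climbs over a toy straight-line program represented as a list of Python statements, repeatedly deleting statements and keeping a candidate only if the value of interest (here, the final value of y on a few sample inputs) is preserved. The program representation, fitness function and neighbourhood move are simplifying assumptions made for illustration; they are not the encoding or the algorithms evaluated in the paper.

    import random

    def run(stmts, x):
        env = {"x": x}
        try:
            for s in stmts:
                exec(s, {}, env)          # toy "program": a list of Python statements
        except Exception:
            return None                   # a broken candidate never preserves the criterion
        return env.get("y")

    def fitness(candidate, original, inputs):
        # Smaller is better, but only if the criterion (final value of y) is preserved.
        if any(run(candidate, i) != run(original, i) for i in inputs):
            return float("inf")
        return len(candidate)

    def hill_climb(original, inputs, steps=200):
        best = list(original)
        for _ in range(steps):
            if len(best) <= 1:
                break
            neighbour = list(best)
            del neighbour[random.randrange(len(neighbour))]    # move: drop one statement
            if fitness(neighbour, original, inputs) <= fitness(best, original, inputs):
                best = neighbour
        return best

    program = ["t = x * 0", "y = x + 1", "z = x * 42", "y = y + t"]
    print(hill_climb(program, inputs=[0, 1, 5]))   # typically ['y = x + 1']: smaller, same y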

    Disambiguation strategies for cross-language information retrieval

    This paper gives an overview of tools and methods for Cross-Language Information Retrieval (CLIR) that were developed within the Twenty-One project. The tools and methods are evaluated with the TREC CLIR task document collection, using Dutch queries on the English document base. The main issue addressed here is an evaluation of two approaches to disambiguation. The underlying question is whether a lot of effort should be put into finding the correct translation for each query term before searching, or whether searching with more than one possible translation leads to better results. The experimental study suggests that the quality of search methods is more important than the quality of disambiguation methods. Good retrieval methods are able to disambiguate translated queries implicitly during searching.
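
    A toy contrast between the two strategies might look like the sketch below: each source-language query term is translated either to a single "best" target-language word or to every candidate translation, and the same (deliberately crude) term-frequency ranking handles both queries. The two-document collection and the bilingual dictionary are invented for illustration and are not the Twenty-One tools or the TREC collection.

    from collections import Counter

    docs = {
        "d1": "the bank raised interest rates",
        "d2": "we walked along the river bank",
    }

    def score(doc_text, query_terms):
        tf = Counter(doc_text.split())
        return sum(tf[t] for t in query_terms)          # crude term-frequency ranking

    def search(query_terms):
        return sorted(docs, key=lambda d: score(docs[d], query_terms), reverse=True)

    # Hypothetical dictionary: each source term maps to one or more candidate translations.
    translations = {"bank": ["bank"], "rente": ["interest", "rent"]}

    one_best = [cands[0] for cands in translations.values()]           # disambiguate first
    expanded = [t for cands in translations.values() for t in cands]   # keep every candidate

    print(search(one_best))   # ranking from a single translation per source term
    print(search(expanded))   # ranking from the expanded, ambiguous query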

    Stop-list slicing.

    Traditional program slicing requires two parameters: a program location and a variable, or perhaps a set of variables, of interest. Stop-list slicing adds a third parameter to the slicing criterion: those variables that are not of interest. This third parameter is called the stop-list. When a variable in the stop-list is encountered, the data-flow dependence analysis of slicing is terminated for that variable. Stop-list slicing further focuses on the computation of interest, while ignoring computations known or determined to be uninteresting. This has the potential to reduce slice size when compared to traditional forms of slicing. In order to assess the size of the reduction obtained via stop-list slicing, the paper reports the results of three empirical evaluations: a large-scale empirical study into the maximum slice size reduction that can be achieved when all program variables are on the stop-list; a study on a real program, to determine the reductions that could be obtained in a typical application; and qualitative case-based studies to illustrate stop-list slicing in the small. The large-scale study concerned a suite of 42 programs of approximately 800 KLoC in total. Over 600K slices were computed. Using the maximal stop-list reduced the size of the computed slices by about one third on average. The typical program showed a slice size reduction of about one quarter. The case-based studies indicate that the comprehension effects are worth further consideration.
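
    The sketch below shows the idea on a toy straight-line "program" using data dependences only: a backward slice is taken from a criterion variable, and any variable on the stop-list simply has its dependences left unexplored, which is what shrinks the slice. The statement encoding and variable names are assumptions made for illustration, not the paper's slicer.

    # Each statement: (line number, variable defined, variables used)
    program = [
        (1, "a", set()),
        (2, "b", {"a"}),
        (3, "c", {"a"}),
        (4, "d", {"b", "c"}),
    ]

    def backward_slice(program, criterion_var, stop_list=frozenset()):
        slice_lines, wanted = set(), {criterion_var}
        for line, defined, used in reversed(program):
            if defined in wanted:
                slice_lines.add(line)
                wanted.discard(defined)
                # Stop-list: dependences through these variables are not chased further.
                wanted |= {u for u in used if u not in stop_list}
        return sorted(slice_lines)

    print(backward_slice(program, "d"))                    # [1, 2, 3, 4]: the full slice
    print(backward_slice(program, "d", stop_list={"c"}))   # [1, 2, 4]: a smaller slice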

    The Dual Feminisation of HIV/AIDS

    This is an Accepted Manuscript of an article published by Taylor & Francis in Globalizations in 2011, available online: http://www.tandfonline.com/10.1080/14747731.2010.49302

    Mixing and merging for spoken document retrieval

    This paper describes a number of experiments that explored the issues surrounding the retrieval of spoken documents. Two such issues were examined: first, how to make the best use of speech recogniser output to produce the highest retrieval effectiveness; second, the potential problems of retrieving from a so-called "mixed collection", i.e. one that contains documents both from a speech recognition system (producing many errors) and from hand transcription (producing presumably near-perfect documents). The first part of the work found that merging the transcripts of multiple recognisers showed the most promise. The investigation in the second part showed how the term weighting scheme used in a retrieval system was important in determining whether the system was affected detrimentally when retrieving from a mixed collection.
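
    One way to picture the transcript-merging result is the sketch below: for each spoken document, the outputs of several recognisers are concatenated before indexing, so a word missed or mis-recognised by one recogniser can still be matched through another, and a length-normalised term weight keeps the longer merged documents from dominating. The recogniser outputs and the weighting scheme are invented for illustration; they are not the systems or collections used in the experiments.

    from collections import Counter

    # Hypothetical outputs from two recognisers for the same two spoken documents.
    recogniser_outputs = {
        "doc1": ["the cat sat on the mat", "the cat sat on a map"],
        "doc2": ["profits rose sharply", "profit rose sharply"],
    }

    # Merge by concatenation, then index the merged transcript of each document.
    index = {doc: Counter(" ".join(transcripts).split())
             for doc, transcripts in recogniser_outputs.items()}

    def score(doc, query):
        tf, length = index[doc], sum(index[doc].values())
        return sum(tf[t] / length for t in query.split())   # length-normalised term frequency

    query = "cat on mat"
    print(sorted(index, key=lambda d: score(d, query), reverse=True))   # ['doc1', 'doc2']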

    Code extraction algorithms which unify slicing and concept assignment

    One approach to reverse engineering is to partially automate subcomponent extraction, improvement and subsequent recombination. Two previously proposed automated techniques for supporting this activity are slicing and concept assignment. However, neither is directly applicable in isolation; slicing criteria (sets of program variables) are simply too low-level in many cases, while concept assignment typically fails to produce executable subcomponents. This paper introduces a unification of slicing and concept assignment which exploits their combined advantages, while overcoming their individual weaknesses. Our 'concept slices' are extracted using high-level criteria, while producing executable subprograms. The paper introduces three ways of combining slicing and concept assignment, with an algorithm for each. The application of the concept slicing algorithms is illustrated with a case study from a large financial organisation.
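
    A minimal sketch of one such combination is given below: a (hypothetical) concept assignment has already labelled each line with a domain concept, the variables defined on the lines carrying the chosen concept become the slicing criterion, and an ordinary backward data-dependence slice is taken so that the extracted subprogram remains executable. The line encoding, concept labels and variable names are all assumptions for illustration; this is not one of the paper's three algorithms.

    # Each statement: (line, variable defined, variables used, concept label)
    program = [
        (1, "rate", set(),            "interest"),
        (2, "bal",  set(),            "account"),
        (3, "intr", {"bal", "rate"},  "interest"),
        (4, "bal",  {"bal", "intr"},  "account"),
    ]

    def concept_slice(program, concept):
        criterion = {d for _, d, _, c in program if c == concept}   # high-level criterion
        slice_lines, wanted = set(), set(criterion)
        for line, defined, used, _ in reversed(program):
            if defined in wanted:
                slice_lines.add(line)
                wanted.discard(defined)
                wanted |= used
        return sorted(slice_lines)

    print(concept_slice(program, "interest"))   # [1, 2, 3]: an executable slice for the concept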

    Program simplification as a means of approximating undecidable propositions

    We describe an approach which mixes testing, slicing, transformation and formal verification to investigate speculative hypotheses concerning a program, formulated during program comprehension activity. Our philosophy is that such hypotheses (which are typically undecidable) can, in some sense, be 'answered' by a partly automated system which returns neither 'true' nor 'false' but a program (the 'test program') which computes the answer. The motivation for this philosophy is the way in which, as we demonstrate, static analysis and manipulation technology can be applied to ensure that the resulting test program is significantly simpler than the original program, thereby simplifying the process of investigating the original hypothesis.
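
    The sketch below is a hand-made illustration, not the paper's tooling, of what such a "test program" can look like: a speculative hypothesis about the original code ("the accumulated total is never negative after the loop") is answered not with true or false but with a much smaller program, stripped of irrelevant computation, that evaluates the hypothesis for a given input. The original function, the hypothesis and the simplification are all invented here.

    def original(items):
        total, count, log = 0, 0, []
        for x in items:
            log.append(("saw", x))        # bookkeeping irrelevant to the hypothesis
            if x > 0:
                total += x
            count += 1
        return total, count, log

    def test_program(items):
        # Simplified by hand: only the computation the hypothesis depends on survives.
        total = 0
        for x in items:
            if x > 0:
                total += x
        return total >= 0                 # the hypothesis, evaluated for this input

    print(test_program([3, -1, 4]))       # True: the hypothesis holds for this test input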

    Loop squashing transformations for amorphous slicing

    Program slicing is a source code extraction technique that can be used to support reverse engineering by automatically extracting executable subprograms that preserve some aspect of the original program's semantics. Although minimal slices are not generally computable, safe approximate algorithms can be used to good effect. However, the precision of such slicing algorithms is a major factor in determining the value of slicing for reverse engineering. Amorphous slicing has been proposed as a way of reducing the size of a slice. Amorphous slices preserve the aspect of semantic interest, but not the syntax that denotes it, making them generally smaller than their syntactically restricted counterparts. Amorphous slicing is suitable for many reverse engineering applications, since reverse engineering typically abandons the existing syntax to facilitate structural improvements. Previous work on amorphous slicing has not attempted to exploit its potential to apply loop-squashing transformations. This paper presents an algorithm for amorphous slicing of loops, which identifies induction variables, transformation rule templates and iteration-determining compile-time expressions. The algorithm uses these to squash certain loops into conditional assignments. The paper also presents an inductive proof of the rule templates and illustrates the application of the algorithm with a detailed example of loop squashing.
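
    As a worked illustration, not one of the paper's rule templates, the sketch below squashes a summation loop into a single conditional assignment: the induction variable i and the trip count are analysable at compile time, so the loop collapses to a guarded closed form.

    def before(n):
        s, i = 0, 0
        while i < n:        # i is the induction variable; the loop runs max(n, 0) times
            s = s + i
            i = i + 1
        return s

    def after(n):
        # Squashed form: the loop replaced by one conditional assignment.
        s = n * (n - 1) // 2 if n > 0 else 0
        return s

    assert all(before(n) == after(n) for n in range(-3, 10))
    print(after(10))        # 45, the same result as the original loop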